Aperture Neuro
Organization for Human Brain Mapping
Preprints posted in the last 90 days, ranked by how well they match Aperture Neuro's content profile, based on 18 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Alizadeh, M.; Chatelain, Y.; Kiar, G.; Glatard, T.
Network neuroscience provides a powerful framework for studying the mechanisms underlying brain-related diseases. As analyses become increasingly computational, ensuring their numerical reliability has become a critical challenge. Small perturbations introduced during processing can propagate through complex pipelines, leading to variability in outcomes and raising concerns about the reproducibility of reported findings. Addressing this issue requires systematic evaluation of pipeline stability to ensure results remain within acceptable numerical limits. While the numerical variability of structural imaging workflows has been investigated, with findings ranging from negligible to substantial, functional MRI (fMRI) pipelines and their derived graph measures remain underexplored. Without rigorous stability assessment, conclusions drawn from these measures may remain uncertain. We systematically evaluated the numerical variability of graph measures of functional connectivity derived from the widely-used fMRIPrep pipeline and compared it to population variability. The resulting Numerical-Population Variability Ratio (NPVR) values typically ranged from 0.1 to 0.2 for most graph metrics, indicating a measurable influence of numerical variability on network-derived outcomes. NPVR values varied across brain regions, thresholding choices, and confound regression strategies. These findings highlight numerical variability as an important factor in functional network studies, particularly when examining subtle effects or working with small sample sizes.
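The ratio reported above can be illustrated with a minimal sketch, assuming NPVR is simply the standard deviation across numerically perturbed pipeline runs divided by the standard deviation across subjects; the paper's exact formulation may differ:

```python
import numpy as np

def npvr(numerical_replicates, population_values):
    """Numerical-Population Variability Ratio (illustrative sketch).

    numerical_replicates: one subject's graph-metric values across
        numerically perturbed pipeline runs.
    population_values: the same graph metric across subjects.

    Hypothetical definition: ratio of the two standard deviations.
    """
    num_sd = np.std(numerical_replicates, ddof=1)
    pop_sd = np.std(population_values, ddof=1)
    return num_sd / pop_sd

# Toy example: small numerical jitter relative to population spread
rng = np.random.default_rng(0)
replicates = 0.5 + 0.01 * rng.standard_normal(10)   # perturbed runs
population = 0.5 + 0.10 * rng.standard_normal(100)  # across subjects
ratio = npvr(replicates, population)
```

Under this toy definition, a ratio of 0.1 to 0.2, as reported above, means numerical variability is 10 to 20 percent as large as the population spread.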
Radosavljevic, L.; Smith, S.; Nichols, T. E.
The UK Biobank (UKB) Brain Imaging cohort contains data from almost 100,000 subjects and has yielded invaluable understanding of the links between the brain and health outcomes and lifestyles. Much of this understanding has come from exploring associations between Imaging Derived Phenotypes (IDPs) and variables unrelated to brain imaging, so-called non-Imaging Derived Phenotypes (nIDPs). When performing analyses of this kind, it is very important to control for well-known confounding factors such as age, sex and socio-economic status, as well as confounds related to the imaging protocol itself. In previous work, we created a pipeline for constructing imaging confounds for use in statistical inference via a standard multivariate linear regression approach (Alfaro-Almagro et al. 2021). However, this approach is problematic when the number of confounds exceeds the number of subjects, and is severely underpowered when the number of subjects is not much larger than the number of confounds. In this work, we perform a simulation study to evaluate 13 modelling approaches for accounting for confounds when their number is similar to or exceeds the number of subjects. Based on the simulation results, we recommend a ridge regression based permutation test for low sample sizes (n ≤ 50), a version of de-sparsified LASSO for intermediate sample sizes (50 < n ≤ 500), and multivariate linear regression aided by Principal Component Analysis (PCA) for larger sample sizes (n > 500). We also demonstrate our recommended methodology on a real data example of finding associations between Alzheimer's Disease (AD) and IDPs.
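A ridge-regression-based permutation test of the kind recommended for small samples can be sketched as follows; this is an illustrative, hypothetical implementation (residualize both variables against the confounds with ridge, then permute residuals), not the authors' exact procedure:

```python
import numpy as np

def ridge_residuals(y, C, alpha=1.0):
    """Residualize y against confound matrix C via ridge regression."""
    p = C.shape[1]
    beta = np.linalg.solve(C.T @ C + alpha * np.eye(p), C.T @ y)
    return y - C @ beta

def ridge_permutation_pvalue(x, y, C, alpha=1.0, n_perm=1000, seed=0):
    """Permutation p-value for an x-y association controlling for
    confounds C via ridge residualization (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    rx = ridge_residuals(x, C, alpha)
    ry = ridge_residuals(y, C, alpha)
    observed = abs(np.corrcoef(rx, ry)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Permuting residuals breaks the x-y link but keeps marginals
        null[i] = abs(np.corrcoef(rng.permutation(rx), ry)[0, 1])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Toy case: number of confounds close to number of subjects
rng = np.random.default_rng(1)
n, p = 40, 30
C = rng.standard_normal((n, p))
x = C @ rng.standard_normal(p) + rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)  # true association present
pval = ridge_permutation_pvalue(x, y, C)
```

The ridge penalty keeps the confound fit well-posed even when the confound count approaches or exceeds n, which is exactly the regime the simulation study targets.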
Capiglioni, M.; Tabarelli, D.; Tambalo, S.; Turco, F.; Wiest, R.; Jovicich, J.
Introduction: Conventional BOLD-fMRI relies on hemodynamic responses that are temporally and spatially indirect markers of neural activity. Developing alternative contrasts, sensitive to neuroelectrical phenomena, is a critical challenge in brain imaging. Spin-lock (SL) fMRI has shown promise in phantom studies for detecting magnetic field changes associated with neuronal activity, but its in-vivo sensitivity and practicality remain unclear. This study evaluated whether SL contrast can effectively detect and localize human neuronal activation, benchmarked against complementary functional modalities, magnetoencephalography (MEG) and 3T BOLD-fMRI, to assess the sensitivity of MR-based neuronal current imaging.
Methods: Thirteen healthy young volunteers underwent SL-based imaging during 8 Hz visual stimulation, along with BOLD and MEG acquisitions. Subjects viewed quadrant-checkerboard stimuli to elicit localized cortical responses. Two balanced SL contrast mechanisms, rotary excitation (REX) and stimulus-induced rotary saturation (SIRS), were employed. Postprocessing targeted stimulus-locked signal fluctuations using a regression-filtering-rectification strategy. Phantom experiments tested sensitivity and analysis pipeline performance.
Results: MEG revealed robust stimulus-locked responses in occipital cortex, with estimated local magnetic field amplitudes of ~0.07 nT. Conventional BOLD-fMRI confirmed reliable hemodynamic activation. In contrast, neither balanced REX nor balanced SIRS produced consistent stimulus-related activation in vivo. Phantom experiments subsequently yielded detection thresholds of 0.2 nT for REX and 0.6 nT for SIRS, exceeding the MEG-estimated physiological field amplitudes.
Conclusions: Under the present experimental conditions, the tested spin-lock fMRI implementations did not achieve sufficient sensitivity for reliable in-vivo detection of neuronal magnetic fields at 3T. Phantom and MEG-based estimates indicate that physiological field amplitudes in the visual cortex lie below current detection limits. These findings establish quantitative constraints on direct neuronal current imaging with MRI and provide a benchmark for future methodological developments aimed at bridging electrophysiology and functional MRI.
Key points:
- We assessed spin-lock fMRI sensitivity using combined SL-fMRI, BOLD-fMRI, MEG, and phantom measurements during visual stimulation.
- MEG and BOLD-fMRI confirmed robust neuronal and hemodynamic activation in the visual cortex.
- SL-fMRI did not achieve reliable in-vivo detection of neuronal magnetic fields; phantom sensitivity limits exceeded MEG-estimated physiological field amplitudes.
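Extracting a stimulus-locked response at a known stimulation frequency, as in the 8 Hz paradigm above, can be loosely illustrated by frequency-tagged regression onto sine and cosine terms; this is a minimal sketch that omits the filtering and rectification steps of the paper's actual pipeline:

```python
import numpy as np

def stimulus_locked_amplitude(signal, fs, f_stim):
    """Amplitude of an oscillation at f_stim, via least-squares
    regression onto sine/cosine regressors (illustrative sketch)."""
    t = np.arange(len(signal)) / fs
    X = np.column_stack([np.sin(2 * np.pi * f_stim * t),
                         np.cos(2 * np.pi * f_stim * t)])
    coef, *_ = np.linalg.lstsq(X, signal - signal.mean(), rcond=None)
    # Amplitude is the length of the (sin, cos) coefficient vector
    return float(np.hypot(coef[0], coef[1]))

# Toy signal: 0.07-unit oscillation at 8 Hz buried in noise
fs, f = 200.0, 8.0
t = np.arange(int(4 * fs)) / fs
rng = np.random.default_rng(0)
sig = 0.07 * np.sin(2 * np.pi * f * t + 0.3) + 0.01 * rng.standard_normal(t.size)
amp = stimulus_locked_amplitude(sig, fs, f)
```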
Rangaprakash, D.; Barry, R. L.
Over the past two decades, open-source research software such as SPM, AFNI and FSL has formed the substrate for advancements in the brain functional magnetic resonance imaging (fMRI) field. The spinal cord fMRI field has matured substantially over the past decade, yet there is limited research software tailored for processing cord fMRI data that has distinct noise sources, unique challenges, niche processing requirements and special needs. Spinal cord fMRI data analysis is a different beast, involving specialized pre- and post-processing steps due to the cord's unique anatomy and higher distortions/physiological noise, thus requiring extensive and careful quality assessment. Building upon 10+ years of research and development, we present Neptune - a user-interface-based MATLAB toolbox. With 30,000+ lines of in-house code, it is designed to be easy to use and does not require programming knowledge. Neptune builds on our previously published 15-step pre-processing pipeline (Barry et al., 2016) and presents a 19-step pipeline with new processing steps and enhancements to existing steps. Neptune has a 4-step post-processing pipeline aimed at fMRI connectivity modeling. It generates extensive and novel quality control visuals to enable a thorough assessment of data quality, and displays them in an elegant webpage format. We demonstrate the utility of Neptune on our 7T data. Certain features of the popular Spinal Cord Toolbox (SCT) are integrated into Neptune, and users can import/export between Neptune and other software such as FSL and SPM. The availability of this open-source, easy-to-use software will benefit the spinal cord fMRI community, and also tip the cost-benefit balance for brain fMRI researchers to invest in learning new software to conduct important neuroscientific and clinical research using spinal cord fMRI.
Gonzalez-Castillo, J.; Caballero Gaudes, C.; Handwerker, D. A.; Bandettini, P. A.
Consistent, high-quality data is key to the success of fMRI studies given the many confounding factors and undesired signals that contaminate these data. Several quality assurance (QA) metrics exist for fMRI (e.g., temporal signal-to-noise ratio (TSNR), percent ghosting, motion estimates), but none of them leverage relationships between echoes that are part of multi-echo (ME) fMRI acquisitions. Here, we fill this gap by proposing a new QA metric for ME-fMRI that quantifies the likelihood that a given ME scan is dominated by BOLD (Blood Oxygenation Level-Dependent) fluctuations. We refer to this metric as pBOLD: the probability of the signal change being primarily BOLD contrast-dominated. Having an estimate of overall BOLD weighting - both before and after preprocessing - is meaningful because BOLD is the intrinsic contrast mechanism used in fMRI to infer neural activity. We introduce pBOLD to the neuroimaging community by first describing the theoretical principles supporting the metric. Next, we validate pBOLD efficacy using a small dataset (N=7 scans) of constant- and cardiac-gated scans that have distinct levels of contributing BOLD fluctuations. Third, we apply pBOLD to a larger publicly available ME dataset (N=439 scans) to evaluate six different pre-processing pipelines, and show how pBOLD provides complementary information to TSNR. Our results show that ME-based denoising increases both pBOLD and TSNR relative to basic denoising; however, including the global signal (GS) as a regressor only improves TSNR, but worsens pBOLD. Further analyses looking at the BOLD-like characteristics of the GS and its relationship to cardiac and respiratory traces suggest that the observed decrease in pBOLD is likely due to a decrease in BOLD fluctuations of neural origin contributing to the GS, and not due to contributions from other physiological BOLD fluctuations (i.e., respiratory and cardiac function).
Finally, we also demonstrate how pBOLD can be applied as a data quality metric, by showing how higher pBOLD results in better ability to predict phenotypes based on whole-brain functional connectivity matrices.
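pBOLD builds on a standard multi-echo principle: BOLD percent signal change grows roughly linearly with echo time, whereas non-BOLD (S0-driven) fluctuations are TE-independent. A minimal sketch comparing those two toy models across echoes follows; this is the underlying principle only, not the paper's actual probability computation:

```python
import numpy as np

def te_dependence_rss(psc, tes):
    """Residual sums of squares for a BOLD-like (linear in TE, through
    the origin) vs non-BOLD (constant across TE) model of percent
    signal change. Lower RSS indicates the better-fitting model."""
    tes = np.asarray(tes, float)
    psc = np.asarray(psc, float)
    # BOLD model: psc = slope * TE
    slope = (tes @ psc) / (tes @ tes)
    rss_bold = np.sum((psc - slope * tes) ** 2)
    # Non-BOLD model: psc = constant
    rss_s0 = np.sum((psc - psc.mean()) ** 2)
    return rss_bold, rss_s0

tes = np.array([14.0, 29.0, 44.0])   # typical ME-fMRI echo times (ms)
bold_like = 0.02 * tes               # signal change scales with TE
rss_b, rss_s = te_dependence_rss(bold_like, tes)
```

For a perfectly TE-scaled fluctuation the linear model fits exactly while the constant model does not, which is the contrast a BOLD-weighting metric exploits.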
Chen, Y.-A. A.; Kasper, L.; Chow, C. T.; Kuo, Y.; Boutet, A.; Germann, J.; Lozano, A. M.; Uludag, K.; Diaconescu, A. O.; Kashyap, S.
Accurate registration of regions of interest (ROIs) from standard atlases to participants' native spaces is a critical step in fMRI studies, as it directly affects the reliability of sampled BOLD signals. While T1-weighted (T1w) image-based ROI registration is well validated and widely adopted in cortical fMRI, its performance degrades in brainstem studies due to the small size, dense packing, and poor visibility of brainstem nuclei on T1w contrast. We hypothesized that incorporating diffusion MR images, containing more information about internal brainstem architecture, should improve ROI registration accuracy. To test this, we developed four registration pipelines that either included or excluded diffusion-based alignment components and evaluated their performance using data from n=20 healthy participants. Registration accuracy was assessed using the Dice coefficient for the red nucleus (RN) and the substantia nigra (SN), and the mis-registration fraction--a metric developed for nuclei that cannot be manually delineated--for the dorsal raphe nucleus (DRN). The results showed that diffusion-based pipelines, using fractional anisotropy (FA) images, non-diffusion-weighted (b0) images, and multivariate combination, outperformed the T1w-only baseline. Probabilistic maps derived from inverse-transformed native ROIs further supported improved sensitivity to inter-individual anatomical variability in the diffusion-augmented pipelines. In addition, analysis of gradient magnitude maps from the Jacobian determinants revealed associations between localized deformation and image modality-specific landmarks. These findings demonstrate the potential of diffusion-augmented pipelines for improving brainstem ROI registration, which could enhance the robustness of fMRI studies on brainstem disorders characterized by functional dysregulation.
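The Dice coefficient used above to score registration accuracy has a standard definition, 2|A∩B| / (|A| + |B|), which can be sketched directly on binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary ROI masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are trivially identical
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1D example: 3 overlapping voxels, 4 voxels in each mask
a = np.array([1, 1, 1, 1, 0, 0])
b = np.array([0, 1, 1, 1, 1, 0])
d = dice_coefficient(a, b)  # 2*3 / (4+4) = 0.75
```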
Demsar, J.; Kraljic, A.; Matkovic, A.; Brege, S.; Pan, L.; Tamayo, Z.; Fonteneau, C.; Helmer, M.; Ji, J. L.; Anticevic, A.; Korponay, C.; Salavrakos, M.; Glasser, M. F.; Nickerson, L. D.; Cho, Y. T.; Repovs, G.
Preprocessing and analysis of neuroimaging data are technically demanding, often requiring a combination of multiple software tools, modality-specific pipelines, and extensive parameter tuning to match dataset characteristics. These complexities make it difficult to document workflows in sufficient detail to ensure complete transparency and reproducibility. To address these challenges, we introduce QuNex recipes, a framework for defining and executing complete neuroimaging workflows - encompassing data onboarding, preprocessing, and analysis - in a transparent, machine- and human-readable format. Recipes are implemented as an integrated feature of the Quantitative Neuroimaging Environment & Toolbox (QuNex), a containerized, open-source platform for end-to-end multimodal and multi-species neuroimaging processing. The recipes framework enables seamless integration of QuNex commands with custom scripts and external tools, capturing every processing step and parameter setting. A fully reproducible study can thus be shared and replicated by providing only (a) the QuNex version used, (b) the recipe file, and (c) the data. This approach standardizes workflow specification, enhances transparency, and enables one-command replication of complex neuroimaging analyses. By providing a standardized way to describe and share workflows, recipes facilitate open exchange of best practices and reproducible methods within the neuroimaging community.
Xiong, Y.; Burke, M.; Melo, L.; Takahashi, K.; Lueckel, M.; Bergmann, T. O.; Nitsche, M. A.; Genc, E.; Chiappini, E.
Introduction: Concurrent TMS-fMRI makes it possible to observe how stimulation affects the target region and connected brain-wide networks. However, hardware limitations represent a major constraint: standard MR head coils provide high imaging sensitivity but no room for the TMS coil, whereas available TMS-compatible MR head coils offer access but low or strongly inhomogeneous signal.
Methods: We developed and benchmarked a flexible "Sushi" MR-receive setup, assembled from two repurposed 18-channel body arrays, that allows TMS coil positioning while maintaining full-brain coverage. Resting-state (n = 12) and task-based working memory (n = 8) fMRI were acquired with the Sushi coil, a commercially available 2x7-channel surface-coil setup, and a standard Siemens 64-channel (non-TMS-capable) head coil. The image-quality cost of TMS capability was addressed by acquiring multi-echo fMRI to allow post-hoc optimization of signal-to-noise ratio (SNR); no TMS was delivered.
Results: In resting-state fMRI, the Sushi coil mapped known canonical resting-state networks (RSNs) comparably to the 64-channel reference and was superior to the surface coils, in particular for the default-mode, auditory, and visual networks. Task-fMRI data showed that the Sushi coil recovered the working memory network more similarly to the 64-channel reference than the surface coil did. Temporal SNR was optimized for all coil acquisitions, yielding a ~30-50% gain and improving between-coil comparability for RSN identification and task-related activity.
Conclusions: Together, post-hoc multi-echo optimization and the Sushi coil setup provide a low-cost, ready-to-use solution for whole-brain concurrent TMS-fMRI recordings. Combining stimulation access with reliable functional network readouts is essential to probe mechanisms of TMS and to inform effective TMS interventions targeting altered brain systems.
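Temporal SNR, used above to compare coil setups, is conventionally the voxel-wise temporal mean divided by the temporal standard deviation; a minimal sketch (detrending and multi-echo combination, which the study applies, are omitted):

```python
import numpy as np

def tsnr(timeseries, axis=-1):
    """Temporal SNR: mean over time divided by temporal std.
    Illustrative; fMRI data are usually detrended first."""
    ts = np.asarray(timeseries, float)
    return ts.mean(axis=axis) / ts.std(axis=axis, ddof=1)

# Toy voxel: baseline signal of 1000 with small fluctuations
rng = np.random.default_rng(0)
voxel = 1000.0 + 10.0 * rng.standard_normal(200)
snr = tsnr(voxel)  # roughly 1000 / 10
```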
Fritz, F. J.; Streubel, T.; Mordhorst, L.; Luethi, N.; Edwards, L. J.; Mushumba, H.; Pueschel, K.; Weiskopf, N.; Kirilina, E.; Mohammadi, S.
Post mortem MRI studies of formalin-fixed brain tissue are essential for linking in vivo MRI contrast to underlying microstructure measured with ex vivo histology, yet formalin not only preserves tissue but also systematically alters MRI-relevant physical properties. To systematically quantify and model these effects, we longitudinally characterized multi-parametric mapping (MPM) measures -- longitudinal (R1) and effective transverse (R2*) relaxation rates, proton density proxy (NA), and magnetization transfer saturation ratio (MTsat) -- across the different post mortem processes, i.e. autolysis, fixation, and hydration. Five whole human brains were scanned longitudinally during fixation (and in situ after rehydration, when available), and compared with an independent in vivo cohort of 25 younger healthy participants. Each MPM parameter followed a distinct trajectory across different post mortem processes. The largest changes were found for R1 during fixation relative to in situ values (more than 250%), followed by R2* with an almost 60% increase, and MTsat with a 26% reduction from in vivo to in situ. NA showed no detectable change during fixation. We developed models describing fixation-induced changes and tissue shrinkage. The R1 changes and tissue shrinkage were closely aligned, reflecting a likely common mechanism. MTsat largely preserved tissue contrast during fixation and rehydration, supporting its use for spatial alignment between in vivo MRI, fixed-tissue MRI, and histology. With our quantitative assessment of post mortem process-dependent changes we provide a unique resource for future studies to better link in vivo to fixed post mortem MRI data and thereby bridge the gap to ex vivo histology.
Sanchez, T.; Mihailov, A.; Marti-Juan, G.; Girard, N.; Manchon, A.; Milh, M.; Eixarch, E.; Dunet, V.; Koob, M.; Pomar, L.; Sichitiu, J.; Gonzalez Ballester, M. A.; Camara, O.; Piella, G.; Bach Cuadra, M.; Auzias, G.
Normative modeling is increasingly used to characterize typical growth trajectories and identify atypical neurodevelopment, including early brain development using magnetic resonance imaging (MRI) acquired before birth. Recent work has emphasized the importance of large sample sizes for accurate and robust centile estimation. In this study, we investigate how image quality influences fetal brain normative models, a critical factor in this context where MRI is acquired on a moving fetus in utero. Using a multi-centric cohort of 635 fetal MRI scans, we applied a standardized visual quality control (QC) protocol with continuous quality ratings. We fit normative models for multiple brain structures under progressively relaxed QC stringency, and quantified the deviations in centile estimates relative to a high-quality reference subgroup. Our results showed that including lower-quality data systematically biased normative centiles, with the strongest effects observed in the outer centiles, particularly the lower tail (1st-10th). Bias increased progressively as QC stringency was relaxed and could not be attributed solely to the number of subjects used to fit the models. Quality-induced bias was structure-dependent, and often not visually apparent at the segmentation level. These findings highlight that image quality is an important source of bias in normative fetal brain modeling, and that increasing sample size at the expense of quality may systematically affect centile estimates, potentially jeopardizing the utility of the model.
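The centile-bias effect described above can be illustrated with a toy empirical-percentile example; all numbers here are invented, and the study fits proper normative models rather than raw percentiles:

```python
import numpy as np

rng = np.random.default_rng(0)
# "High-quality" scans: hypothetical volumes of one brain structure
clean = rng.normal(100.0, 10.0, 500)
# "Low-quality" scans: hypothetical downward shift and extra spread
# (e.g., from motion and segmentation error); purely illustrative
noisy = rng.normal(80.0, 20.0, 150)

p = [1, 10, 50, 90, 99]
centiles_clean = np.percentile(clean, p)
centiles_mixed = np.percentile(np.concatenate([clean, noisy]), p)
bias = centiles_mixed - centiles_clean  # per-centile shift
```

In this toy setup the 1st centile shifts far more than the median when low-quality data are pooled in, mirroring the finding that relaxed QC biases the outer centiles most.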
Chakladar, S.; Pan, S.; Limbrick, O.; Pandey, M.; Halupnik, G. L.; Zhao, A.; Mahjoub, M. R.; Quirk, J. D.; Nazeri, A.; Strahle, J. M.
Introduction: Current workflows for studying hydrocephalus in rodent models rely on manual segmentation or qualitative assessment of ventricular size on small-animal magnetic resonance imaging, which are both inefficient and prone to variability. Atlas-based methods enable more streamlined segmentation, but their analysis is limited to morphologically normal samples.
Objective: This study aimed to develop and internally validate a deep learning model that performs automated segmentation of lateral ventricles in rodent brain MRIs, allowing for 3D ventricle reconstruction, morphological analysis, and ventriculomegaly detection.
Methods: Four U-Net++ neural networks, each with a different encoder backbone, were trained using 307 rodent brain MRIs (262 rats, 45 mice), each with manually segmented lateral ventricles serving as the ground truth. Model performance was evaluated using the Dice coefficient, intersection over union (IoU), and Hausdorff index. The best-performing model was evaluated further for its ability to quantify ventricle volume, convexity, surface area, and symmetry.
Results: The U-Net++ model with an EfficientNet-B1 encoder achieved high accuracy (Dice: 0.823 ± 0.136; IoU: 0.721 ± 0.85). Further assessment of its morphological predictions found strong correlations with manual measurements of ventricular morphology, with Pearson and intraclass correlation coefficients exceeding 0.96 across all metrics. The full validated pipeline was packaged into a publicly available application, hosted at https://ava-tar.org.
Conclusion: This study introduces a deep learning tool for automated segmentation and morphological analysis of lateral ventricles in rodent MRIs. The tool's efficiency and accuracy in quantifying ventricle morphology offer significant utility in preclinical hydrocephalus research, with potential future application in the clinical setting.
Miller, D. J.; Gratton, B.; LeBlanc, Z.; Kaas, J. H.
Introduction: Supervised statistical learning for cell-level segmentation and morphometry in optical microscopy is limited less by algorithmic capacity than by the scarcity of reliable, expert-validated ground truth. In comparative neuroscience and quantitative histology, where classical stains such as Nissl's method remain the primary means to study cellular morphology, this bottleneck is acute: manual annotation is expensive, subject to individual bias, and rarely performed at the scale or consistency that computational approaches demand. No existing platform integrates a stain-specific bioimage segmentation protocol, a structured multi-annotator workflow, and consensus-based quality control into a single pipeline from image ingestion to machine-readable training data.
Methods: We present Anatolution, an open-source, web-based platform designed to address this gap in quality annotations, available at https://anatolution.herokuapp.com/public-tool/. Anatolution organizes microscopy images, including 2D arrays or 3D volumes, into project workspaces where multiple annotators independently label cellular structures against a shared computer vision catalogue. This design enables systematic inter-rater and intra-rater reliability assessment, with consensus derived from agreement across annotators rather than from any single expert's judgment. The platform enables the export of aggregated labels or annotation datasets for downstream statistical learning methods. We describe the system's architecture, its Nissl-specific segmentation pipeline, the consensus annotation workflow, and validation of inter-rater reliability.
Conclusion: Across 20+ histological annotation containers annotated by up to 15 independent raters, consensus boundary agreement increased monotonically with annotator count, reaching a median Dice of 0.79 against the full-rater reference at seven annotators, with top-tier containers achieving leave-one-out ceiling values of 0.621-0.769 for cell-body segmentation. The segmentation pipeline provided effective spatial anchoring, with 88% of consensus-annotated polygons containing at least one algorithmically detected seed. Anatolution provides open-source infrastructure for producing consensus-validated training data from classical histological preparations, addressing the primary bottleneck limiting supervised learning for cell-level morphometry.
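A consensus-from-agreement scheme like the one described can be sketched as a simple majority vote over annotator masks; this is an illustrative sketch, and Anatolution's actual aggregation may differ:

```python
import numpy as np

def consensus_mask(annotations, threshold=0.5):
    """Majority-vote consensus over binary masks from several raters.

    annotations: array of shape (n_raters, ...) with 0/1 labels.
    A pixel enters the consensus when more than `threshold` of the
    raters marked it (illustrative aggregation rule)."""
    votes = np.asarray(annotations, float).mean(axis=0)
    return votes > threshold

# Five hypothetical raters labeling a 6-pixel strip
raters = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1, 0],
])
consensus = consensus_mask(raters)
```

With more raters, idiosyncratic labels are voted out, which is one intuition for why consensus agreement increases monotonically with annotator count.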
Arafat, B.; Nettekoven, C.; Xiang, J. D.; Diedrichsen, J.
Functional brain mapping is an important tool for understanding the organization of the human brain, both at the group level and, to an increasing degree, at the level of the individual. There are currently two main approaches. Resting-state fMRI relies on inter-regional correlations of random fluctuations of the signal. In contrast, task-based localizers typically use a single contrast between a task of interest and a matched control task to identify the location of a functional region in an individual brain. In this paper, we propose and evaluate a third approach: the use of multi-task batteries for both localization of a single functional region and parcellation of multiple functional regions. We show that multi-task localizers produce more consistent estimates of a single functional region across subjects than the single-contrast approach using the same amount of fMRI data. Furthermore, we demonstrate that the multi-task approach is sensitive to true inter-individual differences in region size, and does not suffer from the signal-to-noise-ratio dependence that biases the single-contrast localizer. We then address the question of how to select tasks for the battery, and present a data-driven strategy that optimizes the characterization of a brain structure of interest. We show that such batteries outperform randomly selected batteries both for building individual parcellations and for building individual connectivity models. Finally, we demonstrate that an interspersed design - where all tasks are presented in each imaging run - yields more reliable results than splitting the tasks across different runs. We present an open-source toolbox for the implementation of multi-task batteries, along with a library containing group-averaged activity patterns that can be used to optimize battery selection for different brain structures of interest.
Cao, S.; Shi, L.; Liu, J.; Lian, Z.; Teng, L.; Xiao, Q.; Shi, F.; Sun, K.; Xia, X.; Meng, X.; Shen, D.
Background: In vivo whole-cortex quantification of intracortical signal-defined layering on routinely acquired structural MRI remains limited.
Purpose: To develop and validate an automated framework to reconstruct three intracortical signal-defined layers from 5T three-dimensional (3D) T2-weighted fluid-attenuated inversion recovery (FLAIR) and to characterize whole-cortex morphometrics and regional organization across a prespecified cortical organizational framework.
Materials and Methods: In this retrospective study, 5T 3D FLAIR images were acquired between February and July 2024. Brain Multi-Layer Surface Reconstruction (BrainMLSR) reconstructed three intracortical signal-defined layers and derived intracortical layer thickness and surface area measures and ratios. Performance was evaluated against manual annotations and assessed for test-retest repeatability (n=13) and cross-site feasibility (n=2). Paired two-tailed t-tests and linear mixed-effects models were used. A proof-of-concept analysis compared Heschl's gyrus ratios between 19 patients with temporal lobe epilepsy (TLE) and 19 age-matched healthy controls (HC).
Results: A total of 270 healthy participants (mean age, 54.4±14.5 years; 146 men) were included. Agreement with manual hypointense-layer annotations was high (Dice, 0.960±0.003) and was similar in the cross-site dataset (Dice, 0.954±0.009). In the test-retest dataset, average symmetric surface distance was less than 0.1 mm. Across prespecified systems, thickness and surface area ratios varied by region; within an auditory-perisylvian hierarchy, banksSTS showed a localized turning point with an increased hyperintense-layer thickness ratio and decreased hypointense-layer thickness ratio, accompanied by inflections in surface area ratios (P < .001). In bilateral Heschl's gyrus, hypointense (left: 0.619±0.262 vs 0.881±0.102 mm; right: 0.607±0.310 vs 0.907±0.141 mm) and isointense (left: 0.406±0.225 vs 0.678±0.128 mm; right: 0.478±0.232 vs 0.808±0.176 mm) layer thicknesses were lower in TLE than in HC (all P<.001).
Conclusion: BrainMLSR enabled accurate and repeatable in vivo reconstruction of three intracortical signal-defined layers from a single 5T 3D T2-weighted FLAIR acquisition and provided whole-cortex boundary-based morphometry with interpretable regional organization.
Key Results:
- In this retrospective study of 270 participants, Brain Multi-Layer Surface Reconstruction (BrainMLSR) showed high agreement with hypointense-layer annotations (Dice, 0.960±0.003).
- Within an auditory-perisylvian hierarchy, the hyperintense-layer thickness ratio peaked (0.620±0.003) and the hypointense-layer ratio was lowest (0.156±0.002) in the bank of the superior temporal sulcus (P < .001).
- In a proof-of-concept analysis, patients with temporal lobe epilepsy (vs healthy controls) had a higher hyperintense-layer thickness ratio in the left hemisphere (0.523±0.085 vs 0.417±0.036, P<.001).
Summary: An automated multi-layer surface reconstruction framework (BrainMLSR), applied to 5T 3D T2-weighted FLAIR images, produced reproducible whole-cortex, signal-defined laminar morphometry and demonstrated coherent patterns across a prespecified cortical organizational framework.
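The paired two-tailed t-tests mentioned in the Methods reduce to a one-sample t statistic on within-pair differences; a minimal sketch with entirely hypothetical thickness values:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for matched samples (e.g., age-matched pairs
    or test-retest data). Returns (t, degrees of freedom); a two-tailed
    p-value would come from the t distribution with df = n - 1."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical hypointense-layer thickness (mm) in 5 matched pairs
tle = np.array([0.60, 0.65, 0.58, 0.70, 0.62])
hc = np.array([0.88, 0.90, 0.85, 0.92, 0.87])
tstat, df = paired_t(tle, hc)  # strongly negative: thinner in TLE
```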
Zhu, Y.; Jiang, M.; Chen, J.; Hao, F.; Li, X.; Qi, Y.; Zhang, Y.; Peng, H.; Xie, Y.; Zhu, J.; Ma, Z.
Mesoscale, layer-specific functional MRI (fMRI) enables noninvasive access to cortical microcircuitry, yet widespread adoption has been constrained by a reliance on ultra-high field (≥7.0 T) systems and proprietary pulse sequences. To bridge this gap and enhance accessibility, we developed an open-source framework at 5.0 T for mapping laminar brain activity. This framework integrates a Pulseq-based 3D vascular space occupancy (VASO) sequence with an end-to-end data acquisition and analysis pipeline. At matched sub-millimeter resolution (0.8 mm in-plane), the Pulseq-based 3D implementation increased slab coverage by ~1.82-fold and improved temporal signal-to-noise ratio by ~1.50-fold relative to a vendor-provided 2D-VASO sequence. Validated using a finger-tapping paradigm, individual cerebral blood volume-weighted (VASO) laminar activation profiles consistently revealed the canonical "double-peak" pattern, with distinct superficial and deep peaks in the primary motor cortex. These profiles exhibited excellent cross-visit reliability (r = 0.80), and peak depths showed good spatial reliability (ICC = 0.69 for deep layers; ICC = 0.58 for superficial layers). Between-subject reproducibility was high (r = 0.86). Deploying the identical Pulseq protocol at an independent imaging site reproduced the characteristic double-peak laminar profiles (r = 0.63). At the group level, 5.0 T laminar profiles closely matched established 7.0 T findings, robustly resolving both deep and superficial peaks despite the lower field strength. Notably, for each participant, a single 13-minute VASO run was sufficient to resolve reliable laminar activation patterns that exhibited high consistency with multi-run averages (r = 0.78), highlighting the potential for high-throughput population studies or clinical research settings. The Pulseq-based 3D VASO sequence file, image reconstruction pipeline, and data analysis scripts are openly available to facilitate the adoption of this framework.
This work establishes a practical route towards more accessible and reproducible mesoscale fMRI for studying human laminar functional architecture.
Ivanova, E.; Pollonini, L.; Soltanlou, M.
Significance: Selecting the appropriate motion artefact (MA) correction method for functional near-infrared spectroscopy (fNIRS) data is quite challenging, particularly in light of the need for standardised practice, replication, and transparency in the field. A clear framework for making measurable and replicable decisions is therefore essential. Aim: This paper proposes a guide based on an open-source data quality assessment tool (QT-NIRS) that enables a transparent and evidence-driven choice of MA correction method. Approach: We present the guide in two approaches: a simplified version that is easy to run for beginners, and an advanced version providing more informative output at the cost of additional computations and minor changes to the original QT-NIRS code. Due to its high flexibility and within-subject nature, the method is applicable across samples with varied characteristics. Results: We applied the guide to two challenging datasets from 60 British preschoolers (mean age = 3.94 years, SD = 0.49) and 39 South African school children (mean age = 12.00 years, SD = 0.51). Both simplified and advanced approaches supported similar MA correction methods. Conclusions: While both approaches can be used interchangeably, we recommend the advanced approach when possible due to its more informative and straightforward output, and advise caution when using the simplified version.
de Boer, A. A. A.; Bayer, J. M. M.; Fraza, C.; Chavanne, A.; Rehak Buckova, B.; Tsilimparis, K.; Serin, E.; Bernas, A.; Cirstian, R.; Zabihi, M.; Rutherford, S.; Al Khaledi, A.; Wolfers, T.; Beckmann, C.; Marquand, A. F.
Normative Modelling (brain growth charting) is now a well-established method for computational psychiatry and involves charting centiles of variation across a population in terms of mappings between biology and behavior, providing statistical inferences at the level of the individual. These models have helped the field to move away from case-control analysis toward individual-level analysis. Correspondingly, normative modelling has now been applied to chart brain development and ageing in many populations and has been used to quantify individual deviations across various neurological and psychiatric conditions. This has been supported by large-scale models that are openly accessible for diverse brain imaging modalities. As normative modelling continues to grow, several recent methodological developments, such as non-Gaussian models, longitudinal models, and federated learning, have been implemented in different software tools, including the Predictive Clinical Neuroscience toolkit (PCNtoolkit). In this protocol update, we provide: (i) a revised overview of this methodological landscape; (ii) an update to our 2022 standardised analytical protocol for normative modelling of neuroimaging data, including options for federated and longitudinal normative models; (iii) practical guidance suited to both novice and experienced practitioners supported by open-source code examples implemented in the refactored version of PCNtoolkit; and (iv) updated models for cortical thickness, volumetric data, diffusion-weighted imaging and longitudinal data for use by the community.
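The core individual-level inference in normative modelling reduces to scoring a new measurement against the normative mean and spread predicted for that person's covariates. A minimal Gaussian sketch is below; the abstract notes that PCNtoolkit also supports non-Gaussian models, so treat this as the simplest case, and note that the thickness values and mu/sigma here are invented for illustration.

```python
from math import erf, sqrt

def deviation_score(y, mu, sigma):
    """Z-score of an individual's measure against the normative mean and SD."""
    return (y - mu) / sigma

def centile(y, mu, sigma):
    """Centile of y under a Gaussian normative model: 100 * Phi(z)."""
    z = deviation_score(y, mu, sigma)
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical case: cortical thickness of 2.3 mm where the normative model
# predicts mu = 2.5 mm and sigma = 0.1 mm for this age, sex, and site.
z = deviation_score(2.3, 2.5, 0.1)
print(round(z, 2))                        # -2.0 (two SDs below the norm)
print(round(centile(2.3, 2.5, 0.1), 1))   # 2.3 (roughly the 2nd centile)
```

In practice mu and sigma come from a model fitted on a large reference cohort as functions of age and other covariates, which is exactly what the published pretrained models supply.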
Tireli, E. D.; Larsson, H. B. W.; Vestergaard, M. B.; Cramer, S. P.; Lindberg, U.; Tireli, D.
We present p-Brain, an end-to-end neuroimaging analysis framework for reproducible, automated quantitative DCE-MRI analysis at scale. From standard acquisitions, p-Brain estimates baseline relaxation parameters, converts signal to gadolinium concentration, derives arterial and venous input functions using convolutional neural network (CNN) slice selection and ROI segmentation, and produces voxelwise maps with regional and whole-brain summaries. The pipeline implements Patlak graphical analysis to estimate the blood-brain barrier influx constant (Ki) and plasma volume fraction (vp), and performs model-free residue deconvolution with Tikhonov regularisation to estimate cerebral blood flow (CBF), mean transit time (MTT), and capillary transit-time heterogeneity (CTH) from the same DCE dataset. p-Brain exports analysis-ready outputs, intermediate readouts, structured runtime metadata, and stage-level quality control artifacts to support auditability in batch processing. We evaluate the framework on a technically uniform set of 97 DCE-MRI scans from 58 healthy human participants, and show close agreement between automated Patlak Ki summaries and an established reference workflow. A companion macOS desktop application supports batch execution, job monitoring, and rapid review of curves and maps. p-Brain is open-source and configurable, enabling extension to additional kinetic models.
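Patlak graphical analysis, which the pipeline uses to estimate Ki and vp, linearises the tissue curve against the plasma input: Ct(t)/Cp(t) = Ki * (integral of Cp up to t)/Cp(t) + vp, so Ki is the slope and vp the intercept of a straight-line fit. A sketch on synthetic curves follows; the toy input function and noise-free tissue curve are assumptions for illustration, not p-Brain's actual data handling.

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Estimate Ki (slope) and vp (intercept) via Patlak linearisation:
       ct/cp = Ki * (cumulative trapezoidal integral of cp) / cp + vp."""
    cum_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = cum_cp / cp   # "Patlak time"
    y = ct / cp       # normalised tissue concentration
    ki, vp = np.polyfit(x, y, 1)
    return ki, vp

# Synthetic check with known parameters (purely illustrative):
t = np.linspace(0.01, 10, 200)
cp = np.exp(-0.1 * t) + 0.5   # toy plasma input function
cum = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = 0.002 * cum + 0.02 * cp  # tissue curve built with Ki = 0.002, vp = 0.02
ki, vp = patlak_fit(t, cp, ct)
print(round(ki, 4), round(vp, 4))  # recovers 0.002 and 0.02
```

The fit recovering the known Ki and vp on clean data is the basic sanity check one would run before applying the same transform voxelwise to real DCE concentration maps.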
Navarro-Gonzalez, R.; Aja-Fernandez, S.; Planchuelo-Gomez, A.; de Luis-Garcia, R.
Foundation models (FMs) for brain magnetic resonance imaging (MRI) are increasingly adopted as pretrained backbones for clinical tasks such as brain age prediction, disease classification, and anomaly detection. However, if FM embeddings (internal representations) shift systematically across MRI scanners, downstream analyses built on them may reflect acquisition hardware rather than biology. No study has yet quantified this cross-scanner reproducibility. Here, we assess the cross-scanner reliability of brain MRI FM embeddings and investigate which design factors (pretraining strategy, network architecture, embedding dimensionality, and pretraining dataset scale) best explain the observed differences. Using the ON-Harmony travelling-heads dataset (20 participants, eight scanners, three vendors), we evaluate the embeddings of five architecturally diverse FMs and a FreeSurfer morphometric baseline via within- and between-scanner intraclass correlation coefficient (ICC), variance decomposition, and scanner fingerprinting. Reliability spanned the full spectrum: biology-guided models achieved good-to-excellent cross-scanner ICC (AnatCL: 0.970 [95% confidence interval (CI): 0.94, 0.98]; y-Aware: 0.809 [0.63, 0.88]), matching or surpassing FreeSurfer (0.926 [0.83, 0.96]), whereas purely self-supervised models fell below the poor threshold (BrainIAC: 0.453, BrainSegFounder: 0.307, 3D-Neuro-SimCLR: 0.247), with 23-58% of embedding variance attributable to scanner identity. The strongest correlate of cross-scanner reliability among the models evaluated was pretraining strategy: incorporating biological metadata (cortical morphometrics, age) into the contrastive objective produced scanner-robust embeddings, whereas architecture, dimensionality, and dataset scale did not predict reliability.
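The ICC used throughout this abstract is a variance-decomposition statistic: between-subject variance relative to total variance, so values near 1 mean the scanner contributes little. A minimal one-way random-effects sketch is below; the toy subjects-by-scanners matrix is invented for illustration and the study's exact ICC variant may differ.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for a subjects x raters/scanners matrix:
       ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)      # between-subject mean square
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical embedding summary per subject (rows) across two scanners (columns):
high_reliability = np.array([[1.0, 1.1], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2]])
print(round(icc_oneway(high_reliability), 2))  # 0.99: scanner adds little variance
```

Large scanner-driven shifts inflate the within-subject mean square, pulling the ICC down toward the "poor" range reported for the purely self-supervised models.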
Uus, A.; Hall, M.; Bradshaw, C.; Kumar, A.; Aviles Verdera, J.; Neves Silva, S.; Luis, A.; Waheed, H.; Alarcon Gil, P.; Cordero-Grande, L.; Matthew, J.; Kyriakopoulou, V.; Deprez, M.; Colford, K.; Egloff Collado, A.; Hajnal, J. V.; Rutherford, M.; Hutter, J.; Story, L.
Background: Volumetric assessment of the fetus, placenta and amniotic fluid is clinically valuable, but MRI volumetry is rarely performed in clinical practice because of the labour-intensive manual segmentation of motion-corrupted 2-dimensional (2-D) stacks that it requires. Existing deep-learning approaches typically segment single structures in 2-D motion-corrupted stacks and are limited in accuracy by slice misalignment. No current method provides a reliable automated solution for whole-uterus volumetry in 3-D reconstructed MRI. Furthermore, normative ranges for computation of centiles are currently missing. Objective: To develop an automated pipeline for whole-uterus volumetry in 3-D T2-weighted fetal MRI and to generate normative growth models for fetal, placental and amniotic fluid volumes in healthy pregnancies with confirmed delivery at term. Materials and methods: Deformable slice-to-volume 3-D reconstruction was applied to motion-corrupted T2-weighted (T2W) stacks from 0.55T-3T MRI, and a 3-D UNet was trained to segment fetus, placenta and amniotic fluid on the resulting reconstructed 3-D images. A reporting tool generates centiles, z-scores and structured HTML outputs. Automated segmentation was performed in 357 healthy control datasets spanning a 16-41 weeks gestational age (GA) range with confirmed delivery at term. After visual checks of segmented labels and minor refinements, GA-based quadratic normative volumetry models were derived and correlations with maternal and fetal characteristics assessed. The utility of the pipeline for clinical research was further evaluated using 95 longitudinal scans from 42 fetuses and 86 preterm (≤32 weeks at delivery) pregnancies. Results: Automated segmentation produced accurate 3-D labels, with only small local corrections (<1% volume difference) required in the control cohort (<25% of the datasets).
Fetal and placental volumes increased across gestation, while amniotic fluid volume peaked mid-pregnancy and declined toward term. Volumes and centiles correlated with maternal size and birth weight. Longitudinal scans showed individual fetal and placental trajectories closely following the normative curves, with greater variability in amniotic fluid. Preterm pregnancies showed significantly lower fetal, placental and amniotic fluid volumes and centiles than the controls with confirmed delivery at term. Conclusion: This study introduces an automated whole-uterus volumetry pipeline and corresponding normative 3-D MRI growth models. The method provides robust, standardised volumetric assessment of fetal, placental and amniotic fluid development and offers a practical tool for evaluating growth patterns in both normal and high-risk pregnancies.
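The GA-based quadratic normative models described above amount to fitting a quadratic mean curve of volume against gestational age plus a residual spread, from which z-scores and centiles for new cases follow. A sketch under the simplifying assumptions of Gaussian residuals with constant variance is below; the cohort, coefficients, and noise level are synthetic, and the paper's actual models may handle variance differently.

```python
import numpy as np

def fit_quadratic_normative(ga, volume):
    """Fit volume ~ a*ga^2 + b*ga + c and a residual SD for z-scoring."""
    coeffs = np.polyfit(ga, volume, 2)
    resid = volume - np.polyval(coeffs, ga)
    sd = resid.std(ddof=3)  # three fitted parameters
    return coeffs, sd

def z_score(ga, volume, coeffs, sd):
    """Deviation of an observed volume from the fitted normative curve, in SDs."""
    return (volume - np.polyval(coeffs, ga)) / sd

# Toy cohort: volumes following a known quadratic trend in GA plus noise.
rng = np.random.default_rng(1)
ga = np.linspace(16, 41, 300)
vol = 2.0 * ga**2 - 30.0 * ga + 400.0 + rng.normal(0, 20.0, ga.size)
coeffs, sd = fit_quadratic_normative(ga, vol)

# A case sitting two residual SDs below the fitted curve scores z = -2:
z = z_score(30.0, np.polyval(coeffs, 30.0) - 2 * sd, coeffs, sd)
print(round(z, 6))  # -2.0
```

Centiles then follow from the z-score exactly as in any Gaussian growth chart, which is what the reporting tool's HTML output summarises per structure.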